Exploration server "Santé et pratique musicale" (Health and musical practice)

Warning: this site is under development!
Warning: this site was generated automatically from raw corpora.
The information has therefore not been validated.

Gaussian process inference modelling of dynamic robot control for expressive piano playing.

Internal identifier: 000280 (Main/Exploration); previous: 000279; next: 000281

Authors: Luca Scimeca [United Kingdom]; Cheryn Ng [United Kingdom]; Fumiya Iida [United Kingdom]

Source: PLoS One, 2020

RBID: pubmed:32797107

French descriptors

English descriptors

Abstract

Piano is a complex instrument, which humans learn to play after many years of practice. This paper investigates the complex dynamics of the embodied interactions between a human and piano, in order to gain insights into the nature of humans' physical dexterity and adaptability. In this context, the dynamic interactions become particularly crucial for delicate expressions, often present in advanced music pieces, which is the main focus of this paper. This paper hypothesises that the relationship between motor control for key-pressing and the generated sound is a manifold problem, with high-degrees of non-linearity in nature. We employ a minimalistic experimental platform based on a robotic arm equipped with a single elastic finger in order to systematically investigate the motor control and resulting outcome of piano sounds. The robot was programmed to run 3125 key-presses on a physical digital piano with varied control parameters. The obtained data was applied to a Gaussian Process (GP) inference modelling method, to train a network in terms of 10 playing styles, corresponding to different expressions generated by a Musical Instrument Digital Interface (MIDI). By analysing the robot control parameters and the output sounds, the relationship was confirmed to be highly nonlinear, especially when the rich expressions (such as a broad range of sound dynamics) were necessary. Furthermore this relationship was difficult and time consuming to learn with linear regression models, compared to the developed GP-based approach. The performance of the robot controller was also compared to that of an experienced human player. The analysis shows that the robot is able to generate sounds closer to humans' in some expressions, but requires additional investigations for others.
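
The learning problem the abstract describes, regressing from key-press control parameters to the resulting sound, can be illustrated with generic Gaussian Process regression. The sketch below is illustrative only, not the authors' implementation: the three control parameters, the synthetic MIDI-velocity target, and the kernel choice are all assumptions made for the example, with scikit-learn's GaussianProcessRegressor standing in for the paper's GP inference model.

# Minimal sketch (not the authors' code): GP regression from hypothetical
# key-press control parameters to a MIDI-velocity-like loudness target.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(0)

# Hypothetical controls: press depth (mm), press speed (mm/s), finger
# stiffness (a.u.). The paper's 3125 presses are consistent with a 5-value
# grid over 5 parameters (5^5 = 3125); we use 3 parameters for brevity.
X = rng.uniform([2.0, 50.0, 0.1], [10.0, 500.0, 1.0], size=(200, 3))

# Stand-in nonlinear control-to-velocity map (0-127) plus observation noise.
y = 127.0 / (1.0 + np.exp(4.0 - 0.01 * X[:, 1] - 0.3 * X[:, 0]))
y += rng.normal(0.0, 2.0, size=200)

# Anisotropic RBF kernel (one length scale per control) plus white noise.
kernel = RBF(length_scale=[1.0, 100.0, 0.3]) + WhiteKernel()
gp = GaussianProcessRegressor(kernel=kernel, normalize_y=True).fit(X, y)

# Predict the expected velocity, with uncertainty, for a new key-press.
mean, std = gp.predict(np.array([[5.0, 200.0, 0.5]]), return_std=True)
print(f"predicted velocity: {mean[0]:.1f} +/- {std[0]:.1f}")

The predictive variance is what makes a GP attractive here: unlike the linear-regression baselines the abstract mentions, it reports where in control space the learned control-to-sound map remains uncertain.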

DOI: 10.1371/journal.pone.0237826
PubMed: 32797107
PubMed Central: PMC7428139


Affiliations: Bio-Inspired Robotics Lab., Dept. of Engineering, Cambridge University, Cambridge, United Kingdom (all three authors).


The document in XML format

<record>
<TEI>
<teiHeader>
<fileDesc>
<titleStmt>
<title xml:lang="en">Gaussian process inference modelling of dynamic robot control for expressive piano playing.</title>
<author>
<name sortKey="Scimeca, Luca" sort="Scimeca, Luca" uniqKey="Scimeca L" first="Luca" last="Scimeca">Luca Scimeca</name>
<affiliation wicri:level="1">
<nlm:affiliation>Bio-Inspired Robotics Lab., Dept. of Engineering, Cambridge University Cambridge, United Kingdom.</nlm:affiliation>
<country xml:lang="fr">Royaume-Uni</country>
<wicri:regionArea>Bio-Inspired Robotics Lab., Dept. of Engineering, Cambridge University Cambridge</wicri:regionArea>
<wicri:noRegion>Cambridge University Cambridge</wicri:noRegion>
</affiliation>
</author>
<author>
<name sortKey="Ng, Cheryn" sort="Ng, Cheryn" uniqKey="Ng C" first="Cheryn" last="Ng">Cheryn Ng</name>
<affiliation wicri:level="1">
<nlm:affiliation>Bio-Inspired Robotics Lab., Dept. of Engineering, Cambridge University Cambridge, United Kingdom.</nlm:affiliation>
<country xml:lang="fr">Royaume-Uni</country>
<wicri:regionArea>Bio-Inspired Robotics Lab., Dept. of Engineering, Cambridge University Cambridge</wicri:regionArea>
<wicri:noRegion>Cambridge University Cambridge</wicri:noRegion>
</affiliation>
</author>
<author>
<name sortKey="Iida, Fumiya" sort="Iida, Fumiya" uniqKey="Iida F" first="Fumiya" last="Iida">Fumiya Iida</name>
<affiliation wicri:level="1">
<nlm:affiliation>Bio-Inspired Robotics Lab., Dept. of Engineering, Cambridge University Cambridge, United Kingdom.</nlm:affiliation>
<country xml:lang="fr">Royaume-Uni</country>
<wicri:regionArea>Bio-Inspired Robotics Lab., Dept. of Engineering, Cambridge University Cambridge</wicri:regionArea>
<wicri:noRegion>Cambridge University Cambridge</wicri:noRegion>
</affiliation>
</author>
</titleStmt>
<publicationStmt>
<idno type="wicri:source">PubMed</idno>
<date when="2020">2020</date>
<idno type="RBID">pubmed:32797107</idno>
<idno type="pmid">32797107</idno>
<idno type="doi">10.1371/journal.pone.0237826</idno>
<idno type="pmc">PMC7428139</idno>
<idno type="wicri:Area/Main/Corpus">000196</idno>
<idno type="wicri:explorRef" wicri:stream="Main" wicri:step="Corpus" wicri:corpus="PubMed">000196</idno>
<idno type="wicri:Area/Main/Curation">000196</idno>
<idno type="wicri:explorRef" wicri:stream="Main" wicri:step="Curation">000196</idno>
<idno type="wicri:Area/Main/Exploration">000196</idno>
</publicationStmt>
<sourceDesc>
<biblStruct>
<analytic>
<title xml:lang="en">Gaussian process inference modelling of dynamic robot control for expressive piano playing.</title>
<author>
<name sortKey="Scimeca, Luca" sort="Scimeca, Luca" uniqKey="Scimeca L" first="Luca" last="Scimeca">Luca Scimeca</name>
<affiliation wicri:level="1">
<nlm:affiliation>Bio-Inspired Robotics Lab., Dept. of Engineering, Cambridge University Cambridge, United Kingdom.</nlm:affiliation>
<country xml:lang="fr">Royaume-Uni</country>
<wicri:regionArea>Bio-Inspired Robotics Lab., Dept. of Engineering, Cambridge University Cambridge</wicri:regionArea>
<wicri:noRegion>Cambridge University Cambridge</wicri:noRegion>
</affiliation>
</author>
<author>
<name sortKey="Ng, Cheryn" sort="Ng, Cheryn" uniqKey="Ng C" first="Cheryn" last="Ng">Cheryn Ng</name>
<affiliation wicri:level="1">
<nlm:affiliation>Bio-Inspired Robotics Lab., Dept. of Engineering, Cambridge University Cambridge, United Kingdom.</nlm:affiliation>
<country xml:lang="fr">Royaume-Uni</country>
<wicri:regionArea>Bio-Inspired Robotics Lab., Dept. of Engineering, Cambridge University Cambridge</wicri:regionArea>
<wicri:noRegion>Cambridge University Cambridge</wicri:noRegion>
</affiliation>
</author>
<author>
<name sortKey="Iida, Fumiya" sort="Iida, Fumiya" uniqKey="Iida F" first="Fumiya" last="Iida">Fumiya Iida</name>
<affiliation wicri:level="1">
<nlm:affiliation>Bio-Inspired Robotics Lab., Dept. of Engineering, Cambridge University Cambridge, United Kingdom.</nlm:affiliation>
<country xml:lang="fr">Royaume-Uni</country>
<wicri:regionArea>Bio-Inspired Robotics Lab., Dept. of Engineering, Cambridge University Cambridge</wicri:regionArea>
<wicri:noRegion>Cambridge University Cambridge</wicri:noRegion>
</affiliation>
</author>
</analytic>
<series>
<title level="j">PloS one</title>
<idno type="eISSN">1932-6203</idno>
<imprint>
<date when="2020" type="published">2020</date>
</imprint>
</series>
</biblStruct>
</sourceDesc>
</fileDesc>
<profileDesc>
<textClass>
<keywords scheme="KwdEn" xml:lang="en">
<term>Feedback (MeSH)</term>
<term>Humans (MeSH)</term>
<term>Models, Theoretical (MeSH)</term>
<term>Music (MeSH)</term>
<term>Normal Distribution (MeSH)</term>
<term>Robotics (MeSH)</term>
<term>Sound (MeSH)</term>
</keywords>
<keywords scheme="KwdFr" xml:lang="fr">
<term>Humains (MeSH)</term>
<term>Loi normale (MeSH)</term>
<term>Modèles théoriques (MeSH)</term>
<term>Musique (MeSH)</term>
<term>Robotique (MeSH)</term>
<term>Rétroaction (MeSH)</term>
<term>Son (physique) (MeSH)</term>
</keywords>
<keywords scheme="MESH" xml:lang="en">
<term>Feedback</term>
<term>Humans</term>
<term>Models, Theoretical</term>
<term>Music</term>
<term>Normal Distribution</term>
<term>Robotics</term>
<term>Sound</term>
</keywords>
<keywords scheme="MESH" xml:lang="fr">
<term>Humains</term>
<term>Loi normale</term>
<term>Modèles théoriques</term>
<term>Musique</term>
<term>Robotique</term>
<term>Rétroaction</term>
<term>Son (physique)</term>
</keywords>
</textClass>
</profileDesc>
</teiHeader>
<front>
<div type="abstract" xml:lang="en">Piano is a complex instrument, which humans learn to play after many years of practice. This paper investigates the complex dynamics of the embodied interactions between a human and piano, in order to gain insights into the nature of humans' physical dexterity and adaptability. In this context, the dynamic interactions become particularly crucial for delicate expressions, often present in advanced music pieces, which is the main focus of this paper. This paper hypothesises that the relationship between motor control for key-pressing and the generated sound is a manifold problem, with high-degrees of non-linearity in nature. We employ a minimalistic experimental platform based on a robotic arm equipped with a single elastic finger in order to systematically investigate the motor control and resulting outcome of piano sounds. The robot was programmed to run 3125 key-presses on a physical digital piano with varied control parameters. The obtained data was applied to a Gaussian Process (GP) inference modelling method, to train a network in terms of 10 playing styles, corresponding to different expressions generated by a Musical Instrument Digital Interface (MIDI). By analysing the robot control parameters and the output sounds, the relationship was confirmed to be highly nonlinear, especially when the rich expressions (such as a broad range of sound dynamics) were necessary. Furthermore this relationship was difficult and time consuming to learn with linear regression models, compared to the developed GP-based approach. The performance of the robot controller was also compared to that of an experienced human player. The analysis shows that the robot is able to generate sounds closer to humans' in some expressions, but requires additional investigations for others.</div>
</front>
</TEI>
<pubmed>
<MedlineCitation Status="MEDLINE" Owner="NLM">
<PMID Version="1">32797107</PMID>
<DateCompleted>
<Year>2020</Year>
<Month>10</Month>
<Day>14</Day>
</DateCompleted>
<DateRevised>
<Year>2020</Year>
<Month>10</Month>
<Day>14</Day>
</DateRevised>
<Article PubModel="Electronic-eCollection">
<Journal>
<ISSN IssnType="Electronic">1932-6203</ISSN>
<JournalIssue CitedMedium="Internet">
<Volume>15</Volume>
<Issue>8</Issue>
<PubDate>
<Year>2020</Year>
</PubDate>
</JournalIssue>
<Title>PloS one</Title>
<ISOAbbreviation>PLoS One</ISOAbbreviation>
</Journal>
<ArticleTitle>Gaussian process inference modelling of dynamic robot control for expressive piano playing.</ArticleTitle>
<Pagination>
<MedlinePgn>e0237826</MedlinePgn>
</Pagination>
<ELocationID EIdType="doi" ValidYN="Y">10.1371/journal.pone.0237826</ELocationID>
<Abstract>
<AbstractText>Piano is a complex instrument, which humans learn to play after many years of practice. This paper investigates the complex dynamics of the embodied interactions between a human and piano, in order to gain insights into the nature of humans' physical dexterity and adaptability. In this context, the dynamic interactions become particularly crucial for delicate expressions, often present in advanced music pieces, which is the main focus of this paper. This paper hypothesises that the relationship between motor control for key-pressing and the generated sound is a manifold problem, with high-degrees of non-linearity in nature. We employ a minimalistic experimental platform based on a robotic arm equipped with a single elastic finger in order to systematically investigate the motor control and resulting outcome of piano sounds. The robot was programmed to run 3125 key-presses on a physical digital piano with varied control parameters. The obtained data was applied to a Gaussian Process (GP) inference modelling method, to train a network in terms of 10 playing styles, corresponding to different expressions generated by a Musical Instrument Digital Interface (MIDI). By analysing the robot control parameters and the output sounds, the relationship was confirmed to be highly nonlinear, especially when the rich expressions (such as a broad range of sound dynamics) were necessary. Furthermore this relationship was difficult and time consuming to learn with linear regression models, compared to the developed GP-based approach. The performance of the robot controller was also compared to that of an experienced human player. The analysis shows that the robot is able to generate sounds closer to humans' in some expressions, but requires additional investigations for others.</AbstractText>
</Abstract>
<AuthorList CompleteYN="Y">
<Author ValidYN="Y">
<LastName>Scimeca</LastName>
<ForeName>Luca</ForeName>
<Initials>L</Initials>
<Identifier Source="ORCID">0000-0002-2821-0072</Identifier>
<AffiliationInfo>
<Affiliation>Bio-Inspired Robotics Lab., Dept. of Engineering, Cambridge University Cambridge, United Kingdom.</Affiliation>
</AffiliationInfo>
</Author>
<Author ValidYN="Y">
<LastName>Ng</LastName>
<ForeName>Cheryn</ForeName>
<Initials>C</Initials>
<AffiliationInfo>
<Affiliation>Bio-Inspired Robotics Lab., Dept. of Engineering, Cambridge University Cambridge, United Kingdom.</Affiliation>
</AffiliationInfo>
</Author>
<Author ValidYN="Y">
<LastName>Iida</LastName>
<ForeName>Fumiya</ForeName>
<Initials>F</Initials>
<AffiliationInfo>
<Affiliation>Bio-Inspired Robotics Lab., Dept. of Engineering, Cambridge University Cambridge, United Kingdom.</Affiliation>
</AffiliationInfo>
</Author>
</AuthorList>
<Language>eng</Language>
<PublicationTypeList>
<PublicationType UI="D016428">Journal Article</PublicationType>
<PublicationType UI="D013485">Research Support, Non-U.S. Gov't</PublicationType>
</PublicationTypeList>
<ArticleDate DateType="Electronic">
<Year>2020</Year>
<Month>08</Month>
<Day>14</Day>
</ArticleDate>
</Article>
<MedlineJournalInfo>
<Country>United States</Country>
<MedlineTA>PLoS One</MedlineTA>
<NlmUniqueID>101285081</NlmUniqueID>
<ISSNLinking>1932-6203</ISSNLinking>
</MedlineJournalInfo>
<CitationSubset>IM</CitationSubset>
<MeshHeadingList>
<MeshHeading>
<DescriptorName UI="D005246" MajorTopicYN="N">Feedback</DescriptorName>
</MeshHeading>
<MeshHeading>
<DescriptorName UI="D006801" MajorTopicYN="N">Humans</DescriptorName>
</MeshHeading>
<MeshHeading>
<DescriptorName UI="D008962" MajorTopicYN="Y">Models, Theoretical</DescriptorName>
</MeshHeading>
<MeshHeading>
<DescriptorName UI="D009146" MajorTopicYN="Y">Music</DescriptorName>
</MeshHeading>
<MeshHeading>
<DescriptorName UI="D016011" MajorTopicYN="N">Normal Distribution</DescriptorName>
</MeshHeading>
<MeshHeading>
<DescriptorName UI="D012371" MajorTopicYN="Y">Robotics</DescriptorName>
</MeshHeading>
<MeshHeading>
<DescriptorName UI="D013016" MajorTopicYN="N">Sound</DescriptorName>
</MeshHeading>
</MeshHeadingList>
<CoiStatement>The authors have declared that no competing interests exist.</CoiStatement>
</MedlineCitation>
<PubmedData>
<History>
<PubMedPubDate PubStatus="received">
<Year>2020</Year>
<Month>05</Month>
<Day>01</Day>
</PubMedPubDate>
<PubMedPubDate PubStatus="accepted">
<Year>2020</Year>
<Month>08</Month>
<Day>03</Day>
</PubMedPubDate>
<PubMedPubDate PubStatus="entrez">
<Year>2020</Year>
<Month>8</Month>
<Day>16</Day>
<Hour>6</Hour>
<Minute>0</Minute>
</PubMedPubDate>
<PubMedPubDate PubStatus="pubmed">
<Year>2020</Year>
<Month>8</Month>
<Day>17</Day>
<Hour>6</Hour>
<Minute>0</Minute>
</PubMedPubDate>
<PubMedPubDate PubStatus="medline">
<Year>2020</Year>
<Month>10</Month>
<Day>21</Day>
<Hour>6</Hour>
<Minute>0</Minute>
</PubMedPubDate>
</History>
<PublicationStatus>epublish</PublicationStatus>
<ArticleIdList>
<ArticleId IdType="pubmed">32797107</ArticleId>
<ArticleId IdType="doi">10.1371/journal.pone.0237826</ArticleId>
<ArticleId IdType="pii">PONE-D-20-12759</ArticleId>
<ArticleId IdType="pmc">PMC7428139</ArticleId>
</ArticleIdList>
<ReferenceList>
<Reference>
<Citation>Trends Cogn Sci. 2006 Jul;10(7):319-26</Citation>
<ArticleIdList>
<ArticleId IdType="pubmed">16807063</ArticleId>
</ArticleIdList>
</Reference>
<Reference>
<Citation>Bioinspir Biomim. 2016 Feb 18;11(2):026001</Citation>
<ArticleIdList>
<ArticleId IdType="pubmed">26891473</ArticleId>
</ArticleIdList>
</Reference>
</ReferenceList>
</PubmedData>
</pubmed>
<affiliations>
<list>
<country>
<li>Royaume-Uni</li>
</country>
</list>
<tree>
<country name="Royaume-Uni">
<noRegion>
<name sortKey="Scimeca, Luca" sort="Scimeca, Luca" uniqKey="Scimeca L" first="Luca" last="Scimeca">Luca Scimeca</name>
</noRegion>
<name sortKey="Iida, Fumiya" sort="Iida, Fumiya" uniqKey="Iida F" first="Fumiya" last="Iida">Fumiya Iida</name>
<name sortKey="Ng, Cheryn" sort="Ng, Cheryn" uniqKey="Ng C" first="Cheryn" last="Ng">Cheryn Ng</name>
</country>
</tree>
</affiliations>
</record>

To manipulate this document under Unix (Dilib)

EXPLOR_STEP=$WICRI_ROOT/Sante/explor/SanteMusiqueV1/Data/Main/Exploration
HfdSelect -h $EXPLOR_STEP/biblio.hfd -nk 000280 | SxmlIndent | more

Or

HfdSelect -h $EXPLOR_AREA/Data/Main/Exploration/biblio.hfd -nk 000280 | SxmlIndent | more

To add a link to this page in the Wicri network

{{Explor lien
   |wiki=    Sante
   |area=    SanteMusiqueV1
   |flux=    Main
   |étape=   Exploration
   |type=    RBID
   |clé=     pubmed:32797107
   |texte=   Gaussian process inference modelling of dynamic robot control for expressive piano playing.
}}

To generate wiki pages

HfdIndexSelect -h $EXPLOR_AREA/Data/Main/Exploration/RBID.i   -Sk "pubmed:32797107" \
       | HfdSelect -Kh $EXPLOR_AREA/Data/Main/Exploration/biblio.hfd   \
       | NlmPubMed2Wicri -a SanteMusiqueV1 

Wicri

This area was generated with Dilib version V0.6.38.
Data generation: Mon Mar 8 15:23:44 2021. Site generation: Mon Mar 8 15:23:58 2021